-
Malware poses an increasing threat to critical computing infrastructure, driving demand for more advanced detection and analysis methods. Although raw-binary malware classifiers show promise, they are limited in their capabilities and struggle with the challenges of modeling long sequences. Meanwhile, the rise of large language models (LLMs) in natural language processing showcases the power of massive, self-supervised models trained on heterogeneous datasets, offering flexible representations for numerous downstream tasks. The success of these models is rooted in the size and quality of their training data, the expressiveness and scalability of their neural architecture, and their ability to learn from unlabeled data in a self-supervised manner. In this work, we take the first steps toward developing large malware language models (LMLMs), the malware analog to LLMs. We tackle the core aspects of this objective, namely questions about data, models, pretraining, and finetuning. By pretraining a malware classification model with language modeling objectives, we improve downstream performance on diverse practical malware classification tasks by 1.1% on average and by up to 28.6%, indicating that these models could serve as successors to raw-binary malware classifiers.
Free, publicly-accessible full text available February 24, 2027.
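As a rough illustration of the pretrain-then-finetune recipe described above, the sketch below pretrains a small byte-level transformer with a next-byte language modeling objective and then reuses its representation under a classification head for a downstream malware task. The architecture, sizes, and training loops are illustrative assumptions, not the configuration used in the paper.

```python
# Hypothetical sketch only: a byte-level transformer pretrained with a causal
# language-modeling objective, then finetuned for malware classification.
# Vocabulary, model sizes, and heads are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ByteLM(nn.Module):
    def __init__(self, vocab=257, d_model=256, n_layers=4, n_heads=4, max_len=4096, n_classes=2):
        super().__init__()
        self.tok = nn.Embedding(vocab, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab)        # used during pretraining
        self.cls_head = nn.Linear(d_model, n_classes)   # used during finetuning

    def forward(self, x, causal=False):
        pos = torch.arange(x.size(1), device=x.device)
        h = self.tok(x) + self.pos(pos)
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1)).to(x.device) if causal else None
        return self.encoder(h, mask=mask)

def pretrain_step(model, bytes_batch, opt):
    """Self-supervised objective: predict each next byte from its prefix."""
    h = model(bytes_batch[:, :-1], causal=True)
    logits = model.lm_head(h)
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), bytes_batch[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def finetune_step(model, bytes_batch, labels, opt):
    """Supervised objective: mean-pool the pretrained representation and classify."""
    h = model(bytes_batch).mean(dim=1)
    loss = F.cross_entropy(model.cls_head(h), labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```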
-
Millions of new pieces of malicious software (i.e., malware) are introduced each year. This poses significant challenges for antivirus vendors, who use machine learning to detect and analyze malware, and who must keep up with changes in the distribution while retaining knowledge of older variants. Continual learning (CL) holds the potential to address this challenge by reducing the incremental storage and computational costs of regularly retraining over all the collected data. Prior work, however, shows that CL techniques, which are designed primarily for computer vision tasks, fare poorly when applied to malware classification. To address these issues, we begin with an exploratory analysis of a typical malware dataset, which reveals that malware families are heterogeneous and difficult to characterize, requiring a wide variety of samples to learn a robust representation. Based on these findings, we propose Malware Analysis with Distribution-Aware Replay (MADAR), a CL framework that accounts for the unique properties and challenges of the malware data distribution. Through extensive evaluation on large-scale Windows and Android malware datasets, we show that MADAR significantly outperforms prior work. This highlights the importance of understanding domain characteristics when designing CL techniques and demonstrates a path forward for the malware analysis domain.
Free, publicly-accessible full text available October 22, 2026.
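To make the replay idea concrete, here is a minimal, hypothetical sketch of a distribution-aware replay buffer that budgets stored exemplars per malware family and mixes them into each new task's training data. The proportional budgeting rule below is an illustrative assumption, not MADAR's exact sampling strategy.

```python
# Hypothetical sketch: a per-family replay buffer for continual malware learning.
# The budgeting rule (exemplars proportional to family size) is an assumption.
import random
from collections import defaultdict

class DistributionAwareReplay:
    def __init__(self, budget=1000):
        self.budget = budget                # total number of stored exemplars
        self.memory = defaultdict(list)     # family label -> stored samples

    def update(self, samples, families):
        # Add the new task's samples, then trim each family to its share of the budget.
        for x, fam in zip(samples, families):
            self.memory[fam].append(x)
        total = sum(len(v) for v in self.memory.values())
        for fam, items in self.memory.items():
            share = max(1, int(self.budget * len(items) / total))
            if len(items) > share:
                self.memory[fam] = random.sample(items, share)

    def replay_batch(self, k):
        # Draw k exemplars across families to mix with the current task's batch.
        pool = [x for items in self.memory.values() for x in items]
        return random.sample(pool, min(k, len(pool)))
```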
-
The proliferation of online face images has heightened privacy concerns, as adversaries can exploit facial features for nefarious purposes. While adversarial perturbations have been proposed to safeguard these images, their effectiveness remains questionable. This paper introduces IVORY, a novel adversarial purification method that leverages the Diffusion Transformer-based Stable Diffusion 3 model to purify perturbed images and improve facial feature extraction. Evaluated across gender recognition, ethnicity recognition, and age group classification tasks with CNNs such as VGG16, SENet, and MobileNetV3 and vision transformers such as SwinFace, IVORY consistently restores classifier performance to near-clean levels in white-box settings, outperforming traditional defenses such as Adversarial Training, DiffPure, and IMPRESS. For example, it improved gender recognition accuracy from 37.8% to 96% under the PGD attack for VGG16 and age group classification accuracy from 2.1% to 52.4% under AutoAttack for MobileNetV3. In black-box scenarios, IVORY achieves a 22.8% average accuracy gain. IVORY also reduces SSIM noise by over 50% at 1x resolution and by up to 80% at 2x resolution compared to DiffPure. Our analysis further reveals that adversarial perturbations alone do not fully protect against soft-biometric extraction, highlighting the need for comprehensive evaluation frameworks and robust defenses.
Free, publicly-accessible full text available May 26, 2026.
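A minimal sketch of the purify-then-classify evaluation described above is shown below. The `purify` callable stands in for a diffusion-based purifier (for example, an image-to-image pass through a Stable Diffusion 3 pipeline); its interface and the loop structure are assumptions for illustration, not IVORY's implementation.

```python
# Hypothetical sketch: measure soft-biometric classification accuracy on
# adversarially perturbed face images after running them through a purifier.
import torch

@torch.no_grad()
def purified_accuracy(classifier, purify, loader, device="cpu"):
    """classifier: a trained soft-biometric model (e.g., gender or age group).
    purify: callable mapping a batch of perturbed images to purified images.
    loader: yields (adversarial images, labels) batches."""
    classifier.eval()
    correct, total = 0, 0
    for adv_images, labels in loader:
        adv_images, labels = adv_images.to(device), labels.to(device)
        clean_estimate = purify(adv_images)     # purifier removes the perturbation
        preds = classifier(clean_estimate).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total
```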
-
Tor users derive anonymity in part from the size of the Tor user base, but Tor struggles to attract and support more users due to performance limitations. Previous works have proposed modifications to Tor’s path selection algorithm to enhance both performance and security, but many proposals have unintended consequences because they incorporate information related to client location. We instead propose selecting paths using a global view of the network, independent of client location, and we propose doing so with a machine learning classifier that predicts the performance of a given path before a circuit is built. We show, through a variety of simulated and live experimental settings across different time periods, that this approach can significantly improve performance compared to Tor’s default path selection algorithm and two previously proposed approaches. In addition to evaluating the security of our approach with traditional metrics, we propose a novel anonymity metric that captures information leakage resulting from location-aware path selection, and we show that our path selection approach leaks no more information than the default path selection algorithm.
Free, publicly-accessible full text available March 13, 2026.
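The sketch below illustrates, under stated assumptions, how a pretrained performance classifier could score candidate relay paths from a global network view before a circuit is built, independent of client location. The feature extraction, the scikit-learn-style `predict_proba` interface, and the top-k sampling rule are hypothetical details, not the paper's exact design.

```python
# Hypothetical sketch: rank candidate Tor paths with a performance classifier
# and pick among the best-scoring ones before building a circuit.
import random

def select_path(candidate_paths, path_features, perf_classifier, top_k=5):
    """candidate_paths: list of (guard, middle, exit) relay tuples.
    path_features: callable mapping a path to a feature vector
                   (e.g., advertised bandwidths of its relays).
    perf_classifier: model with predict_proba(), trained to predict whether a
                     path will meet a performance target."""
    feats = [path_features(p) for p in candidate_paths]
    scores = perf_classifier.predict_proba(feats)[:, 1]   # P(path performs well)
    ranked = sorted(zip(candidate_paths, scores), key=lambda t: -t[1])
    # Randomize among the top-scoring paths to avoid overloading a few relays.
    return random.choice([p for p, _ in ranked[:top_k]])
```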
